Extracting frequent words from a collection of texts has been done extensively across many disciplines. Extracting phrases, on the other hand, is rarely done because of complications inherent in phrase extraction, the most significant of which is double counting: words or phrases are counted again when they also occur as part of a longer phrase. Several phrase-mining papers have been written on solutions to this problem, but they either require a list of so-called quality phrases to be available to the extraction process, or require human interaction to identify those quality phrases during the process. We present a method that eliminates double counting without the need to identify a list of quality phrases. In the context of a set of texts, we define a principal phrase as a phrase that does not cross punctuation marks, does not begin or end with a stop word, occurs frequently within those texts without double counting, and is meaningful to the user. Our method can identify such principal phrases independently, without human input, and can extract them from any text. An R package called PHM has been developed to implement this method.
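The abstract leaves the extraction procedure implicit, so here is a minimal Python sketch of one way to read the principal-phrase definition: segment at punctuation, enumerate n-grams whose edges are not stop words, then greedily discount the counts of sub-phrases of longer frequent phrases. The stop-word list, thresholds, and the greedy discounting scheme are our assumptions; the authors' actual algorithm lives in the PHM R package.

```python
import re
from collections import Counter

# A minimal sketch, assuming a toy stop-word list and thresholds; the
# authors' actual algorithm is implemented in the PHM R package.
STOP_WORDS = {"the", "a", "an", "of", "in", "on", "and", "or", "to", "is"}
MIN_FREQ = 3   # assumed frequency threshold
MAX_LEN = 4    # assumed maximum phrase length in words

def candidate_phrases(text):
    """Enumerate n-grams that stay inside one punctuation-delimited
    segment and whose first and last words are not stop words."""
    for segment in re.split(r"[.,;:!?()\"]+", text.lower()):
        words = segment.split()
        for i in range(len(words)):
            for j in range(i + 1, min(i + MAX_LEN, len(words)) + 1):
                phrase = words[i:j]
                if phrase[0] in STOP_WORDS or phrase[-1] in STOP_WORDS:
                    continue
                yield " ".join(phrase)

def principal_phrases(texts):
    """Greedy double-count removal (our assumption): an occurrence
    already attributed to a longer frequent phrase is not counted
    again for its sub-phrases."""
    counts = Counter(p for t in texts for p in candidate_phrases(t))
    frequent = {p: c for p, c in counts.items() if c >= MIN_FREQ}
    # Visit longer phrases first; discount their sub-phrases.
    for phrase in sorted(frequent, key=lambda p: -len(p.split())):
        words = phrase.split()
        for i in range(len(words)):
            for j in range(i + 1, len(words) + 1):
                sub = " ".join(words[i:j])
                if sub != phrase and sub in frequent:
                    frequent[sub] -= frequent[phrase]
    return {p: c for p, c in frequent.items() if c >= MIN_FREQ}
```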
Contextualized representation models such as ELMo (Peters et al., 2018a) and BERT (Devlin et al., 2018) have recently achieved state-of-the-art results on a diverse array of downstream NLP tasks. Building on recent token-level probing work, we introduce a novel edge probing task design and construct a broad suite of sub-sentence tasks derived from the traditional structured NLP pipeline. We probe word-level contextual representations from four recent models and investigate how they encode sentence structure across a range of syntactic, semantic, local, and long-range phenomena. We find that existing models trained on language modeling and translation produce strong representations for syntactic phenomena, but only offer comparably small improvements on semantic tasks over a non-contextual baseline.
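Edge probing, as described, attaches a small trainable classifier to spans of a frozen encoder's token vectors. A minimal PyTorch sketch follows; the mean pooling and MLP sizes are simplifications (the paper uses a learned, self-attentive pooling), and the random stand-in states are a placeholder for real encoder output.

```python
import torch
import torch.nn as nn

class EdgeProbe(nn.Module):
    """Trainable probe over a frozen encoder's token vectors; only
    this head is updated. Mean pooling and the MLP sizes are
    simplifications of the paper's self-attentive pooling design."""
    def __init__(self, hidden_dim, num_labels, two_spans=True):
        super().__init__()
        in_dim = hidden_dim * (2 if two_spans else 1)
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, num_labels)
        )

    def pool(self, token_vecs, span):
        start, end = span                  # [start, end) token indices
        return token_vecs[start:end].mean(dim=0)

    def forward(self, token_vecs, span1, span2=None):
        feats = self.pool(token_vecs, span1)
        if span2 is not None:
            feats = torch.cat([feats, self.pool(token_vecs, span2)])
        return self.mlp(feats)

# Example: probe a hypothetical coreference arc between two spans of a
# frozen encoder's output (random stand-in states of width 768 here).
frozen_states = torch.randn(12, 768)
logits = EdgeProbe(768, num_labels=2)(frozen_states, (2, 4), (9, 10))
```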
Large-scale models combining text and images have made incredible progress in recent years. However, they can still fail at tasks requiring compositional knowledge, such as correctly picking out a red cube from a picture of multiple shapes. We examine the ability of CLIP (Radford et al., 2021) to caption images requiring compositional knowledge. We implement five compositional language models to probe the kinds of structure that CLIP may be using, and develop a novel training algorithm, Compositional Skipgram for Images (CoSI), to train these models. We look at performance in attribute-based tasks, requiring the identification of a particular combination of attribute and object (such as "red cube"), and in relational settings, where the spatial relation between two shapes (such as "cube behind sphere") must be identified. We find that in some conditions, CLIP is able to learn attribute-object labellings, and to generalize to unseen attribute-object combinations. However, we also see evidence that CLIP is not able to bind features together reliably. Moreover, CLIP is not able to reliably learn relations between objects, whereas some compositional models are able to learn these perfectly. Of the five models we developed, none were able to generalize to unseen relations.
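The attribute-based probe can be pictured as zero-shot caption ranking: does CLIP score the correct attribute-object composition above distractors with swapped attributes? A sketch using the public Hugging Face CLIP checkpoint (the scene image and caption set are placeholders; the paper's actual evaluation and its five compositional models are not reproduced here):

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Rank attribute-object captions against one image; a correct binding
# should put "a red cube" first for a scene containing a red cube.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("scene.png")  # hypothetical rendered-shapes image
captions = ["a red cube", "a blue cube", "a red sphere", "a blue sphere"]

inputs = processor(text=captions, images=image,
                   return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]

for caption, p in sorted(zip(captions, probs.tolist()), key=lambda x: -x[1]):
    print(f"{p:.3f}  {caption}")
```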
Large language models (LLMs) have exploded in popularity in the past few years and have achieved undeniably impressive results on benchmarks as varied as question answering and text summarization. We provide a simple new prompting strategy that leads to yet another supposedly "super-human" result, this time outperforming humans at common sense ethical reasoning (as measured by accuracy on a subset of the ETHICS dataset). Unfortunately, we find that relying on average performance to judge capabilities can be highly misleading. LLM errors differ systematically from human errors in ways that make it easy to craft adversarial examples, or even perturb existing examples to flip the output label. We also observe signs of inverse scaling with model size on some examples, and show that prompting models to "explain their reasoning" often leads to alarming justifications of unethical actions. Our results highlight how human-like performance does not necessarily imply human-like understanding or reasoning.
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
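Because BLOOM is openly released, the model can be loaded directly from the Hugging Face Hub. A small usage sketch with the 560M-parameter variant, since the full 176B checkpoint needs a multi-GPU setup (the prompt and generation settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Smaller public BLOOM variant; the 176B model uses the same interface
# but requires sharding across many accelerators.
name = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.float32)

prompt = "A decoder-only Transformer is"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=40, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```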
Lexical semantics and cognitive science point to affordances (i.e., the actions that objects support) as critical for understanding and representing nouns and verbs. However, the study of these semantic features has not yet been integrated with the "foundation" models that currently dominate language representation research. We hypothesize that predictive modeling of object state over time will result in representations that encode object affordance information "for free". We train a neural network to predict objects' trajectories in a simulated interaction and show that our network's latent representations differentiate between both observed and unobserved affordances. We find that models trained on 3D simulations from a spatial dataset outperform conventional 2D computer-vision models trained on a similar task and, on initial inspection, that differences between concepts correspond to expected features (e.g., rolling entails rotation). Our results suggest a way in which modern deep learning approaches can be fused with traditional semantic notions of lexical representation.
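A minimal sketch of the predictive-modeling setup described above: a recurrent network consumes past object states (position plus rotation) and is trained to predict the next state, with its final hidden state serving as the representation later probed for affordances. The state dimensionality, architecture, and loss are our illustrative choices, not the paper's.

```python
import torch
import torch.nn as nn

class TrajectoryPredictor(nn.Module):
    """Predict the next object state from past states; the final GRU
    hidden state is the learned representation probed for affordances.
    Sizes and architecture are illustrative assumptions."""
    def __init__(self, state_dim=7, hidden_dim=128):  # xyz + quaternion
        super().__init__()
        self.encoder = nn.GRU(state_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, state_dim)

    def forward(self, states):             # states: (batch, time, state_dim)
        hidden_seq, last = self.encoder(states)
        next_state = self.head(hidden_seq)  # one-step-ahead prediction
        return next_state, last.squeeze(0)  # latent used for affordance probes

model = TrajectoryPredictor()
past = torch.randn(8, 20, 7)                # 8 simulated interactions
pred, latent = model(past)
loss = nn.functional.mse_loss(pred[:, :-1], past[:, 1:])  # predict t+1 from t
```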
Distributional models learn representations of words from text, but are criticized for their lack of grounding, that is, the linking of text to the non-linguistic world. Grounded language models have had success in learning to connect concrete categories like nouns and adjectives to the world via images and videos, but can struggle to isolate the meaning of the verbs themselves from the context in which they typically occur. In this paper, we investigate the extent to which trajectories (i.e., an object's position and rotation over time) naturally encode verb semantics. We build a procedurally generated agent-object-interaction dataset, obtain human annotations for the verbs that occur in this data, and compare several methods of representation learning given these trajectories. We find that trajectories correlate as-is with some verbs (e.g., fall), and that additional abstraction via self-supervised pretraining can further capture nuanced differences in verb meaning (e.g., roll vs. slide).
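To make "additional abstraction via self-supervised pretraining" concrete, here is one hedged reading: pretrain a trajectory autoencoder without labels, then fit a linear probe for verb labels (e.g., roll vs. slide) on its frozen bottleneck. The autoencoder stands in for whichever pretraining objective the paper actually compares; all sizes are illustrative.

```python
import torch
import torch.nn as nn

class TrajAutoencoder(nn.Module):
    """Self-supervised abstraction over raw trajectories: compress a
    state sequence to a latent code and reconstruct it. A stand-in for
    the paper's pretraining objective; sizes are assumptions."""
    def __init__(self, state_dim=7, latent_dim=32):
        super().__init__()
        self.enc = nn.GRU(state_dim, latent_dim, batch_first=True)
        self.dec = nn.GRU(latent_dim, state_dim, batch_first=True)

    def forward(self, traj):                      # (batch, time, state)
        _, z = self.enc(traj)                     # (1, batch, latent)
        z_seq = z.transpose(0, 1).expand(-1, traj.size(1), -1)
        recon, _ = self.dec(z_seq)
        return recon, z.squeeze(0)

ae = TrajAutoencoder()
traj = torch.randn(16, 50, 7)                     # simulated interactions
recon, z = ae(traj)
recon_loss = nn.functional.mse_loss(recon, traj)  # pretraining objective

verb_probe = nn.Linear(32, 2)                     # roll vs. slide classifier
logits = verb_probe(z.detach())                   # probe the frozen embedding
```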
Language models demonstrate both quantitative improvements and new qualitative capabilities as they increase in scale. Despite their potentially transformative impact, these new capabilities remain poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, math, commonsense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense Transformer architectures, and Switch-style sparse Transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, though this can be improved with prompting.
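BIG-bench tasks are commonly specified as JSON with an input and scored targets; the multiple-choice case reduces to picking the target with the highest log-likelihood under the model. A schematic scorer follows (GPT-2 and the toy task below are placeholders rather than BIG-bench data, and the prompt/target token-boundary handling is approximate):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Schematic multiple-choice scoring: pick the target with the highest
# log-likelihood given the prompt. Model and task are placeholders.
name = "gpt2"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()

def loglik(prompt, target):
    ids = tok(prompt + target, return_tensors="pt").input_ids
    n_prompt = len(tok(prompt).input_ids)
    with torch.no_grad():
        logits = model(ids).logits.log_softmax(-1)
    # Score only target tokens, each predicted from the prior position.
    tgt = ids[0, n_prompt:]
    return logits[0, n_prompt - 1:-1].gather(1, tgt[:, None]).sum().item()

example = {"input": "Q: Is ice colder than steam? A:",
           "target_scores": {" yes": 1, " no": 0}}
pred = max(example["target_scores"], key=lambda t: loglik(example["input"], t))
print("model picks:", pred)
```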
We introduce a novel corpus of 445 human- and computer-generated documents, comprising about 27,000 clauses annotated for semantic clause types and coherence relations, which allows for fine-grained comparison of artificial and natural discourse modes. The corpus covers both formal and informal discourse, and contains documents generated using fine-tuned GPT-2 (Zellers et al., 2019) and GPT-3 (Brown et al., 2020). We showcase the usefulness of this corpus by providing preliminary evidence that the use of fewer, shorter, and more frequent clause relations is associated with lower quality in computer-generated narratives and arguments.
Deep retrieval models are widely used for learning entity representations and for recommendation. Federated learning provides a privacy-preserving way to train these models without requiring the centralization of user data. However, federated deep retrieval models usually perform much worse than their centralized counterparts due to non-IID (independent and identically distributed) training data on clients, an intrinsic property of federated learning that limits the negatives available for training. We demonstrate that this problem is distinct from the commonly studied client-drift problem. This work proposes batch-insensitive losses as a way to mitigate the non-IID negatives problem for federated movie recommendation. We explore a variety of techniques and find that batch-insensitive losses can effectively improve the performance of federated deep retrieval models, increasing the relative recall of the federated model by up to 93.15% and reducing the relative recall gap between it and a centralized model from 27.22%-43.14% to 0.53%-2.42%. We also open-source our code framework to accelerate further research on and applications of federated deep retrieval models.
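The abstract names batch-insensitive losses without specifying one, so the contrast can be sketched as follows: the standard in-batch softmax uses whatever items happen to share the batch as negatives, which skews badly under non-IID client data, whereas a loss that samples negatives from the full item table is insensitive to batch composition. The hinge-loss variant below is our illustration, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def in_batch_softmax_loss(user_emb, item_emb):
    """Standard retrieval loss: other in-batch items act as negatives,
    so a non-IID client batch yields skewed, too-easy negatives."""
    logits = user_emb @ item_emb.t()                  # (B, B)
    labels = torch.arange(user_emb.size(0))
    return F.cross_entropy(logits, labels)

def hinge_loss_global_negatives(user_emb, item_emb, item_table,
                                margin=0.5, k=32):
    """A batch-insensitive alternative (our assumption): negatives are
    drawn uniformly from the full item embedding table, independent of
    batch composition."""
    neg_idx = torch.randint(item_table.size(0), (user_emb.size(0), k))
    neg = item_table[neg_idx]                                    # (B, k, d)
    pos_score = (user_emb * item_emb).sum(-1, keepdim=True)      # (B, 1)
    neg_score = torch.einsum("bd,bkd->bk", user_emb, neg)        # (B, k)
    return F.relu(margin - pos_score + neg_score).mean()

# e.g., with embedding dim 64, batch 16, and a 10k-item table:
users, items = torch.randn(16, 64), torch.randn(16, 64)
table = torch.randn(10_000, 64)
print(in_batch_softmax_loss(users, items).item(),
      hinge_loss_global_negatives(users, items, table).item())
```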